Hadoop 3 Cluster Setup — Installing Hive
Previous posts in this series:

- Hadoop 3 Cluster Setup — Installing the virtual machines
- Hadoop 3 Cluster Setup — Installing Hadoop and configuring the environment
- Hadoop 3 Cluster Setup — Configuring the NTP service
- Hadoop 3 Cluster Setup — Installing HBase and basic operations

Now it is Hive's turn. Installing Hive is fairly simple: download the package, extract it, and configure hive-site.xml and hive-env.sh.

1. Download the Hive package

Official mirror: http://mirror.bit.edu.cn/apache/hive/hive-2.3.3/

2. Extract into the hadoop directory

```shell
tar -zxvf apache-hive-2.3.3-bin.tar.gz   # extract
mv apache-hive-2.3.3-bin hive2.3.3       # rename the directory for convenience
```

3. Configure the Hive environment variables

```shell
[hadoop@venn05 ~]$ more .bashrc
# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
        . /etc/bashrc
fi

# Uncomment the following line if you don't like systemctl's auto-paging feature:
# export SYSTEMD_PAGER=

# User specific aliases and functions

# jdk
export JAVA_HOME=/opt/hadoop/jdk1.8
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH

# hadoop
export HADOOP_HOME=/opt/hadoop/hadoop3
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

# hive
export HIVE_HOME=/opt/hadoop/hive2.3.3
export HIVE_CONF_DIR=$HIVE_HOME/conf
export PATH=$HIVE_HOME/bin:$PATH
```

4. Create the Hive directories on HDFS

Hive's working directories are defined by two properties (defaults as shipped in the template):

```xml
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
  <description>location of default database for the warehouse</description>
</property>
<property>
  <name>hive.exec.scratchdir</name>
  <value>/tmp/hive</value>
  <description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/&lt;username&gt; is created, with ${hive.scratch.dir.permission}.</description>
</property>
```

So create the following directories:

```shell
hadoop fs -mkdir -p /user/hive/warehouse   # hive warehouse location
hadoop fs -mkdir -p /tmp/hive              # hive scratch directory
# grant permissions
hadoop fs -chmod -R 777 /user/hive/warehouse
hadoop fs -chmod -R 777 /tmp/hive
```

Note: the permissions must be granted, otherwise Hive fails to start with:

```
Logging initialized using configuration in jar:file:/opt/hadoop/hive2.3.3/lib/hive-common-2.3.3.jar!/hive-log4j2.properties Async: true
Exception in thread "main" java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rwxr-xr-x
        at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:720)
        at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:650)
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:582)
        at org.apache.hadoop.hive.ql.session.SessionState.beginStart(SessionState.java:549)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:750)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
```
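The directory and permission commands in step 4 can be wrapped in a small idempotent script. This is a sketch under the assumption that the `hadoop` CLI is on the PATH and HDFS is up; it skips quietly when it is not.

```shell
#!/usr/bin/env bash
# Sketch: create Hive's warehouse and scratch directories on HDFS and open
# their permissions, matching the paths used in this article. Skips when the
# hadoop CLI is not available, so it is safe to re-run anywhere.
ensure_hive_dirs() {
  if ! command -v hadoop >/dev/null 2>&1; then
    echo "hadoop CLI not found; skipping HDFS setup"
    return 0
  fi
  for dir in /user/hive/warehouse /tmp/hive; do
    hadoop fs -mkdir -p "$dir"       # no-op if the directory already exists
    hadoop fs -chmod -R 777 "$dir"   # Hive requires a writable scratch dir
  done
  echo "hive HDFS directories ready"
}

ensure_hive_dirs
```

Running it twice is harmless, since `-mkdir -p` and `-chmod` are both idempotent.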
5. Edit hive-site.xml

```shell
cp hive-default.xml.template hive-site.xml
vim hive-site.xml
```

Change 1: replace every occurrence of `${system:java.io.tmpdir}` in hive-site.xml with a concrete directory, /opt/hadoop/hive2.3.3/tmp (4 occurrences).

Change 2: replace every occurrence of `${system:user.name}` with a concrete user name, root (3 occurrences).

Before:

```xml
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>${system:java.io.tmpdir}/${system:user.name}</value>
  <description>Local scratch space for Hive jobs</description>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>${system:java.io.tmpdir}/${hive.session.id}_resources</value>
  <description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
  <name>hive.querylog.location</name>
  <value>${system:java.io.tmpdir}/${system:user.name}</value>
  <description>Location of Hive run time structured log file</description>
</property>
<property>
  <name>hive.server2.logging.operation.log.location</name>
  <value>${system:java.io.tmpdir}/${system:user.name}/operation_logs</value>
  <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
```

After:

```xml
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/opt/hadoop/hive2.3.3/tmp/root</value>
  <description>Local scratch space for Hive jobs</description>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/opt/hadoop/hive2.3.3/tmp/${hive.session.id}_resources</value>
  <description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
  <name>hive.querylog.location</name>
  <value>/opt/hadoop/hive2.3.3/tmp/root</value>
  <description>Location of Hive run time structured log file</description>
</property>
<property>
  <name>hive.server2.logging.operation.log.location</name>
  <value>/opt/hadoop/hive2.3.3/tmp/root/operation_logs</value>
  <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
```

Configure MySQL as the metastore database:

```
mysql> CREATE USER 'hive'@'%' IDENTIFIED BY 'hive';   # create the hive user
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT ALL ON *.* TO 'hive'@'%';                # grant privileges
Query OK, 0 rows affected (0.00 sec)
```

Then update the metastore connection properties:

```xml
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://venn05:3306/hive?createDatabaseIfNotExist=true</value>
  <description>JDBC connect string for a JDBC metastore. To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL. For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
  <description>Username to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
  <description>password to use against metastore database</description>
</property>
```
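The two bulk substitutions in step 5 can also be applied with sed instead of hand-editing. A sketch, operating on a mock one-line fragment here so it is safe to try anywhere; point the same two commands at the real hive-site.xml on your node (the target path and user match this article's layout):

```shell
#!/usr/bin/env bash
# Sketch: replace the ${system:...} placeholders in hive-site.xml with
# concrete values, as described in step 5. Uses '#' as the sed delimiter
# because the replacement contains slashes.
workdir=$(mktemp -d)
cat > "$workdir/hive-site.xml" <<'EOF'
<value>${system:java.io.tmpdir}/${system:user.name}</value>
EOF

sed -i 's#\${system:java\.io\.tmpdir}#/opt/hadoop/hive2.3.3/tmp#g' "$workdir/hive-site.xml"
sed -i 's#\${system:user\.name}#root#g' "$workdir/hive-site.xml"

cat "$workdir/hive-site.xml"
# → <value>/opt/hadoop/hive2.3.3/tmp/root</value>
```

`sed -i` edits in place (GNU sed, as on a typical CentOS node); drop `-i` first to preview the result.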
6. Edit hive-env.sh

```shell
cp hive-env.sh.template hive-env.sh
vim hive-env.sh
```

Append the following at the end:

```shell
export HADOOP_HOME=/opt/hadoop/hadoop3
export HIVE_CONF_DIR=/opt/hadoop/hive2.3.3/conf
export HIVE_AUX_JARS_PATH=/opt/hadoop/hive2.3.3/lib
```

7. Upload the MySQL driver jar

Upload the MySQL JDBC connector jar to: $HIVE_HOME/lib
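It is worth confirming the connector jar actually landed in the lib directory before initializing the schema. A sketch; the default HIVE_HOME and the `mysql-connector*.jar` file-name pattern are assumptions based on this article's layout and the usual connector naming:

```shell
#!/usr/bin/env bash
# Sketch: check that a MySQL connector jar is present in $HIVE_HOME/lib.
# Without it, schematool will likely fail to load com.mysql.jdbc.Driver.
check_mysql_driver() {
  local lib="${HIVE_HOME:-/opt/hadoop/hive2.3.3}/lib"
  if ls "$lib"/mysql-connector*.jar >/dev/null 2>&1; then
    echo "driver found in $lib"
  else
    echo "no mysql-connector*.jar in $lib"
  fi
}

check_mysql_driver
```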
8. Initialize the Hive metastore schema

```shell
schematool -initSchema -dbType mysql
```
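After `-initSchema` succeeds, the same tool can report the schema version now recorded in the metastore. A sketch that skips quietly when the Hive CLI tools are not on the PATH:

```shell
#!/usr/bin/env bash
# Sketch: verify the metastore schema after initialization using
# schematool's -info mode; skips when schematool is not installed.
verify_schema() {
  if ! command -v schematool >/dev/null 2>&1; then
    echo "schematool not on PATH; skipping verification"
    return 0
  fi
  schematool -dbType mysql -info
}

verify_schema
```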
9. Start Hive

```shell
hive
```
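Instead of dropping into the interactive shell, a one-shot statement via `hive -e` makes a handy non-interactive smoke test for the install. A sketch that skips quietly when the `hive` command is not on the PATH:

```shell
#!/usr/bin/env bash
# Sketch: run a single statement through the Hive CLI to confirm the
# install, metastore connection, and HDFS directories all work together.
smoke_test() {
  if ! command -v hive >/dev/null 2>&1; then
    echo "hive not on PATH; skipping smoke test"
    return 0
  fi
  hive -e 'show databases;'
}

smoke_test
```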
Done.